Thank you for your valuable feedback, which is very helpful in improving the paper. We're encouraged by the broadly positive feedback, and greatly appreciate the critical and constructive suggestions.

"Put this in the context of other work on computational homogenization / multi-scale finite element": Our method is related to these and to the boundary element method (BEM).

"Limitation associated with micro-scale buckling... the coarse-grain behavior might exhibit hysteretic effects": This is a good point.

"How sensitive is the outer optimization to the accuracy of the surrogate gradients?" and "Do you know how the CES method scales with system size in terms of accuracy and evaluation time": In terms of …

"the method to solve the outer optimization over BCs to find minimum energy solutions to the composed surrogates": Free DoFs are optimized to minimize the total predicted energy using L-BFGS.

"The discussion of the surrogate and i.i.d. …"

"Are the BCs shared when a boundary is common between two cells": Yes. We have one DoF for each blue point in Fig. 2.

"It's not clear how the HMC and PDE solver are used together": HMC is used to generate the training BCs, preferring larger … The PDE solver is used to compute the gradient of the pdf (which depends on E) w.r.t. the BC: given BCs, we run the solver to determine the internal u and E, and we compute dE/dBC with the … Then we use this to compute the gradient of the pdf w.r.t. the BCs, which is needed for the leapfrog step.

"Does the HMC require a significant burn-in time before producing reasonable samples": No. Note that we don't truly care … Per the appendix, HMC took between 3 and 100 leapfrog steps per sample.

"The process of using the surrogates to solve the original problem can be explained in more detail."

"Newton's method is neither the fastest nor the most stable... a comparison with more sophisticated methods would be …": From a brief look, Liu et al.'s method appears to be tailored for …

Reviewer 5: "There is one outlier in L2 compression that was quite bad": We will discuss this in the main paper.
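The outer optimization described above (free boundary DoFs minimized with L-BFGS over the summed per-cell surrogate energies) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the quadratic per-cell energy, the cell count, and the DoF indexing are all stand-ins for the learned surrogates.

```python
# Sketch: minimize the total energy predicted by composed per-cell
# surrogates over a shared vector of free boundary DoFs, using L-BFGS.
# The surrogate here is a placeholder SPD quadratic, not a learned model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_cells, dofs_per_cell, n_dofs = 4, 6, 20

# Placeholder surrogate energy for each cell: E_c(b) = 0.5 * b^T A_c b.
A = [np.eye(dofs_per_cell) + 0.1 * rng.standard_normal((dofs_per_cell, dofs_per_cell))
     for _ in range(n_cells)]
A = [0.5 * (a + a.T) + dofs_per_cell * np.eye(dofs_per_cell) for a in A]  # make SPD

# Each cell reads its boundary values from the shared DoF vector, so DoFs
# on a boundary common to two cells are automatically shared.
idx = [rng.choice(n_dofs, size=dofs_per_cell, replace=False) for _ in range(n_cells)]

def total_energy(x):
    """Sum of per-cell surrogate energies."""
    return sum(0.5 * x[i] @ a @ x[i] for a, i in zip(A, idx))

def total_grad(x):
    """Scatter-add each cell's dE/dBC back into the shared DoF vector."""
    g = np.zeros_like(x)
    for a, i in zip(A, idx):
        g[i] += a @ x[i]
    return g

x0 = rng.standard_normal(n_dofs)
res = minimize(total_energy, x0, jac=total_grad, method="L-BFGS-B")
print(res.success, res.fun)
```

With SPD placeholder energies the minimum is zero at the origin, so the optimizer should drive the total predicted energy to (numerically) zero.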
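The HMC/solver interaction described above can likewise be sketched: the solver supplies dE/dBC, which, via the chain rule, gives the gradient of the log-pdf needed for each leapfrog step. Everything concrete below is an illustrative assumption, not the paper's setup: the "solver" is a toy quadratic energy, the sampling density is taken as p(BC) ∝ exp(−E), and the step size and trajectory length are arbitrary.

```python
# Sketch: HMC over boundary conditions, with the gradient of the log-pdf
# obtained from a (placeholder) PDE solver's dE/dBC via the chain rule.
import numpy as np

rng = np.random.default_rng(1)
H = np.eye(3)  # placeholder stiffness: E(bc) = 0.5 * bc^T H bc

def solve_pde(bc):
    """Placeholder PDE solve: returns the energy E and its gradient dE/dBC."""
    return 0.5 * bc @ H @ bc, H @ bc

def grad_log_pdf(bc):
    # Chain rule: d log p / dBC = (d log p / dE) * (dE/dBC).
    # For the placeholder pdf p(bc) ∝ exp(-E(bc)), d log p / dE = -1.
    _, dE_dbc = solve_pde(bc)
    return -dE_dbc

def hmc_sample(bc, n_leapfrog=20, step=0.1):
    """One HMC transition: leapfrog integration plus Metropolis correction."""
    p = rng.standard_normal(bc.shape)            # fresh momentum
    bc_new, p_new = bc.copy(), p.copy()
    p_new += 0.5 * step * grad_log_pdf(bc_new)   # initial half step
    for _ in range(n_leapfrog - 1):
        bc_new += step * p_new
        p_new += step * grad_log_pdf(bc_new)
    bc_new += step * p_new
    p_new += 0.5 * step * grad_log_pdf(bc_new)   # final half step

    def hamiltonian(q, mom):
        E, _ = solve_pde(q)
        return E + 0.5 * mom @ mom

    if np.log(rng.uniform()) < hamiltonian(bc, p) - hamiltonian(bc_new, p_new):
        return bc_new
    return bc

samples = [np.zeros(3)]
for _ in range(500):
    samples.append(hmc_sample(samples[-1]))
```

Each sample here costs one PDE solve per leapfrog step (for the gradient) plus two for the accept/reject test, which is why the number of leapfrog steps per sample matters for overall cost.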
"A comment might help the reader situate this work within the more usual (less idyllic) context of approximating …": This is a good suggestion; we will relate our method to other work on learning energies.
We thank the reviewers for their careful reading of the manuscript and their constructive suggestions.
Chimera supports switching between BFV and TFHE, while Glyph enables switching between BGV and TFHE. Some users may not have such large network bandwidth. In contrast, Glyph first trains a CNN model on a plaintext public dataset. Apart from sending the encrypted input data, the training in Glyph does not involve the client.
Compared with secure Multi-Party Computation (MPC), HE supports non-interactive operations and greatly reduces the communication cost. One of our baselines, CHET [9], claims better performance than prior works such as E2DM. We will add E2DM to the related work and compare it against our Falcon in the revised version. Falcon uses a non-interactive HE setting.